block code
Hyper-Graph-Network Decoders for Block Codes
Neural decoders were shown to outperform classical message passing techniques for short BCH codes. In this work, we extend these results to much larger families of algebraic block codes, by performing message passing with graph neural networks. The parameters of the sub-network at each variable-node in the Tanner graph are obtained from a hypernetwork that receives the absolute values of the current messages as input. To add stability, we employ a simplified version of the arctanh activation that is based on a high-order Taylor approximation of this activation function. Our results show that for a large number of algebraic block codes, from diverse families of codes (BCH, LDPC, Polar), the decoding obtained with our method outperforms the vanilla belief propagation method, as well as other learning techniques from the literature.
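The stabilized activation lends itself to a short illustration. Below is a minimal sketch of a truncated Taylor expansion of arctanh; the truncation order and the exact form used in the paper are assumptions here.

```python
import numpy as np

def arctanh_taylor(x, order=5):
    # Truncated odd-power Taylor series of arctanh:
    #   arctanh(x) = x + x^3/3 + x^5/5 + ...
    # Unlike np.arctanh, this stays finite as |x| -> 1, which is the
    # source of numerical instability in log-domain belief propagation.
    out = np.zeros_like(x, dtype=float)
    for k in range(1, order + 1, 2):
        out += x**k / k
    return out
```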
On the Optimality of Single-label and Multi-label Neural Network Decoders
Gültekin, Yunus Can, Scheepers, Péter, Yuan, Yuncheng, Corradi, Federico, Alvarado, Alex
We investigate the design of two neural network (NN) architectures recently proposed as decoders for forward error correction: the so-called single-label NN (SLNN) and multi-label NN (MLNN) decoders. These decoders have been reported to achieve near-optimal codeword- and bit-wise performance, respectively. Results in the literature show near-optimality for a variety of short codes. In this paper, we analytically prove that certain SLNN and MLNN architectures can, in fact, always realize optimal decoding, regardless of the code. These optimal architectures and their binary weights are shown to be defined by the codebook, i.e., no training or network optimization is required. Our proposed architectures are in fact not NNs, but a different way of implementing the maximum likelihood decoding rule. Optimal performance is numerically demonstrated for Hamming $(7,4)$, Polar $(16,8)$, and BCH $(31,21)$ codes. The results show that our optimal architectures are less complex than the SLNN and MLNN architectures proposed in the literature, which in fact only achieve near-optimal performance. Extension to longer codes is still hindered by the curse of dimensionality. Therefore, even though SLNN and MLNN can perform maximum likelihood decoding, such architectures cannot be used for medium and long codes.
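Since the claimed optimal architectures simply re-implement the maximum likelihood rule defined by the codebook, the underlying idea can be illustrated with a brute-force ML decoder. This is a minimal sketch for BPSK over an AWGN channel; the function name and the generator-matrix interface are our own illustration, not the paper's.

```python
import numpy as np
from itertools import product

def ml_decode(llr, G):
    # Enumerate all 2^k codewords; feasible only for short codes,
    # which is exactly the curse of dimensionality noted above.
    k, n = G.shape
    msgs = np.array(list(product([0, 1], repeat=k)))
    codebook = msgs @ G % 2          # (2^k, n) binary codewords
    bpsk = 1 - 2 * codebook          # map bit 0 -> +1, bit 1 -> -1
    # For BPSK over AWGN, ML decoding maximizes the correlation
    # between the channel LLRs and each candidate codeword.
    return codebook[np.argmax(bpsk @ llr)]
```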
Reviews: Hyper-Graph-Network Decoders for Block Codes
In this paper, the authors propose to use a fully-connected NN to improve BP decoding for block codes with a regular degree distribution. The results are quite interesting because they show that we can do better than BP for these regular codes by weighting the different contributions coming from the parity checks. In a way, it tells each bit which parity checks it should trust more at each BP step, which allows the modified BP algorithm to converge faster and more accurately to the right codeword. The gains are marginal, but given how good BP typically is, that should not come as a surprise and should not be held against the paper. I have several comments about the paper that I would like to see addressed in the final version.
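The mechanism the reviewer describes, scaling each parity check's contribution before it enters a bit's update, is in essence a weighted BP variable-node update. A minimal sketch, with our own names and a per-edge weight vector as an assumption:

```python
import numpy as np

def weighted_vn_update(llr, c2v, w):
    # llr: channel LLR of one variable node (scalar)
    # c2v: incoming check-to-variable messages, shape (d,)
    # w:   learned per-edge weights, shape (d,); plain BP uses w = 1
    total = llr + np.sum(w * c2v)
    # Subtract each edge's own (weighted) message so the outgoing
    # variable-to-check messages stay extrinsic.
    return total - w * c2v
```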
Reviews: Hyper-Graph-Network Decoders for Block Codes
This paper proposes a neural-network-based decoder architecture for binary linear block codes with constant-degree variable nodes. It is based on message passing on the unfolded Tanner graph, but replaces the variable-node operation in each iteration with a neural network g, whose parameters are provided by another neural network f that takes the absolute values of the messages as its input. Experimental results demonstrate that the proposed scheme performs well for various types of codes. Although the review scores were around the acceptance threshold in the initial round of review, two reviewers raised their scores after the authors' rebuttal, so that now all the reviewers are positive. I would thus like to recommend acceptance of this paper.
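The f/g pairing summarized above can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the layer sizes and the choice of a one-layer g are assumptions.

```python
import torch
import torch.nn as nn

class HyperVariableNode(nn.Module):
    # f maps |messages| to the parameters of a small network g;
    # g then processes the (signed) messages themselves.
    def __init__(self, deg, hidden=16):
        super().__init__()
        self.deg, self.hidden = deg, hidden
        n_params = deg * hidden + hidden          # weights + biases of g
        self.f = nn.Sequential(nn.Linear(deg, 32), nn.ReLU(),
                               nn.Linear(32, n_params))

    def forward(self, msgs):                      # msgs: (batch, deg)
        theta = self.f(msgs.abs())                # g's parameters from |m|
        W = theta[:, :self.deg * self.hidden]
        W = W.view(-1, self.hidden, self.deg)
        b = theta[:, self.deg * self.hidden:]
        h = torch.tanh(torch.bmm(W, msgs.unsqueeze(-1)).squeeze(-1) + b)
        return h.sum(dim=-1)                      # outgoing message
```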
5G LDPC Linear Transformer for Channel Decoding
Hernandez, Mario, Pinero, Fernando
This work introduces a novel, fully differentiable transformer decoder with linear-time complexity, alongside a regular transformer decoder, for correcting 5G New Radio (NR) LDPC codes. We propose a scalable approach to decoding linear block codes with $O(n)$ complexity rather than the $O(n^2)$ of regular transformers. The architectures' performance is compared to Belief Propagation (BP), the production-level decoding algorithm used for 5G NR LDPC codes. We achieve bit error rate performance that matches a regular transformer decoder and surpasses one-iteration BP, while also achieving competitive runtime against BP, even for larger block codes. We utilize Sionna, Nvidia's 5G & 6G physical layer research software, for reproducible results.
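The $O(n)$ claim presumably rests on some linearization of attention. As a hedged illustration of how attention cost can drop from $O(n^2 d)$ to $O(n d^2)$, here is the generic kernel trick (Katharopoulos-style linear attention); the paper's exact mechanism may differ.

```python
import torch
import torch.nn.functional as F

def linear_attention(Q, K, V, eps=1e-6):
    # Compute phi(Q) @ (phi(K)^T V) instead of (phi(Q) phi(K)^T) @ V.
    # The (d, d) summary KV avoids ever forming the (n, n) score matrix.
    phi = lambda x: F.elu(x) + 1                  # positive feature map
    Qp, Kp = phi(Q), phi(K)                       # (n, d) each
    KV = Kp.transpose(-2, -1) @ V                 # (d, d)
    Z = Qp @ Kp.sum(dim=-2, keepdim=True).transpose(-2, -1)  # (n, 1)
    return (Qp @ KV) / (Z + eps)                  # row-normalized output
```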
On the Design and Performance of Machine Learning Based Error Correcting Decoders
Yuan, Yuncheng, Scheepers, Péter, Tasiou, Lydia, Gültekin, Yunus Can, Corradi, Federico, Alvarado, Alex
This paper analyzes the design and competitiveness of four neural network (NN) architectures recently proposed as decoders for forward error correction (FEC) codes. We first consider the so-called single-label neural network (SLNN) and the multi-label neural network (MLNN) decoders which have been reported to achieve near maximum likelihood (ML) performance. Here, we show analytically that SLNN and MLNN decoders can always achieve ML performance, regardless of the code dimensions -- although at the cost of computational complexity -- and no training is in fact required. We then turn our attention to two transformer-based decoders: the error correction code transformer (ECCT) and the cross-attention message passing transformer (CrossMPT). We compare their performance against traditional decoders, and show that ordered statistics decoding outperforms these transformer-based decoders. The results in this paper cast serious doubts on the application of NN-based FEC decoders in the short and medium block length regime.
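Ordered statistics decoding, the classical baseline that reportedly wins here, is easy to sketch at reprocessing order 0: hard-decide the most reliable linearly independent positions and re-encode. A minimal sketch under our own interface (practical OSD additionally reprocesses low-weight error patterns on top of this):

```python
import numpy as np

def osd0_decode(llr, G):
    # Sort positions from most to least reliable.
    k, n = G.shape
    order = np.argsort(-np.abs(llr))
    Gp = (G[:, order] % 2).astype(int)
    hard = (llr < 0).astype(int)[order]      # hard decisions, permuted

    # GF(2) Gaussian elimination, taking pivots greedily from the
    # most reliable columns so the k "basis" positions are reliable.
    pivots, row = [], 0
    for col in range(n):
        if row == k:
            break
        nz = np.nonzero(Gp[row:, col])[0]
        if len(nz) == 0:
            continue
        Gp[[row, row + nz[0]]] = Gp[[row + nz[0], row]]
        for r in range(k):
            if r != row and Gp[r, col]:
                Gp[r] ^= Gp[row]
        pivots.append(col)
        row += 1

    # Re-encode from the hard decisions on the pivot positions,
    # then undo the reliability permutation.
    cw_perm = hard[pivots] @ Gp % 2
    cw = np.empty(n, dtype=int)
    cw[order] = cw_perm
    return cw
```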